Continuous-Time Attention for Sequential Learning

Authors

Abstract

The attention mechanism is crucial for sequential learning, where a wide range of applications have been successfully developed. An attention model is basically trained to spotlight a region of interest in a sequence of hidden states. Most attention methods compute the attention score by relating a query with the representation of a discrete-time state trajectory, and therefore cannot directly attend to a continuous-time trajectory as represented by a neural differential equation (NDE) combined with a recurrent network. This paper presents a new attention method that is tightly integrated with the NDE to construct an attentive continuous-time machine. Attention is performed at all times over different kinds of irregularly sampled time signals. Information missing from the data due to sampling loss, especially in the presence of long sequences, can be seamlessly compensated by the attended representation. Experiments on samples from human activities, dialogue sentences, and medical features show the merits of the proposed method for activity recognition, sentiment classification, and mortality prediction, respectively.
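The idea of attending at arbitrary times over a continuous-time state trajectory can be illustrated with a minimal sketch. The `continuous_time_attention` function below is a hypothetical illustration, not the paper's method: it stands in for the NDE solution with simple linear interpolation of irregularly sampled hidden states, then computes scaled dot-product attention between a query and the interpolated trajectory on a dense time grid.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def continuous_time_attention(times, states, query, t_grid):
    """Attend over a continuous-time hidden-state trajectory.

    `states` holds hidden vectors observed at irregular `times`.
    The trajectory between observations is linearly interpolated
    here as a stand-in for integrating a neural ODE. Attention
    scores are scaled dot products between `query` and the state
    at each grid time, normalized with a softmax.
    """
    # Interpolate each hidden dimension onto the dense evaluation grid.
    traj = np.stack(
        [np.interp(t_grid, times, states[:, d]) for d in range(states.shape[1])],
        axis=1,
    )  # shape: (len(t_grid), d_hidden)
    scores = softmax(traj @ query / np.sqrt(len(query)))
    return scores @ traj  # attended representation, shape (d_hidden,)

# Irregularly sampled sequence of 3-d hidden states.
rng = np.random.default_rng(0)
times = np.array([0.0, 0.7, 1.5, 3.0])
states = rng.normal(size=(4, 3))
query = states[-1]                      # use the last state as the query
t_grid = np.linspace(0.0, 3.0, 50)      # attend at all times on a dense grid
context = continuous_time_attention(times, states, query, t_grid)
```

Because the trajectory is defined at every point of `t_grid`, the attended representation draws on times between the raw observations, which is how gaps from irregular sampling can be compensated.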


Similar resources

Stable Rough Extreme Learning Machines for the Identification of Uncertain Continuous-Time Nonlinear Systems

Rough extreme learning machines (RELMs) are rough neural networks with one hidden layer, where the parameters between the inputs and hidden neurons are arbitrarily chosen and never updated. In this paper, we propose RELMs with a stable online learning algorithm for the identification of continuous-time nonlinear systems in the presence of noises and uncertainties, and we prove the global ...


Continuous-time Sequential Decision Feedback: Revisited

Sequential feedback communication has wide-ranging applications, such as low-power communications and error-resilience protocols. Two kinds of feedback communication systems can be identified: information feedback and decision feedback. Continuous-time sequential decision feedback communication is the focus of this paper. The case when the detector test statistic is a Poisson random walk proc...


Learning to Reject Sequential Importance Steps for Continuous-Time Bayesian Networks

Applications of graphical models often require the use of approximate inference, such as sequential importance sampling (SIS), for estimation of the model distribution given partial evidence, i.e., the target distribution. However, when SIS proposal and target distributions are dissimilar, such procedures lead to biased estimates or require a prohibitive number of samples. We introduce ReBaSIS,...


A continuous time neural model for sequential action : Supplemental Information ∗

The Leabra framework is described in detail in O’Reilly and Munakata (2000); O’Reilly, Munakata, Frank, Hazy, and Contributors (2013) and O’Reilly (2001), and summarized here. The standard Leabra equations have been used to simulate over 40 different models in O’Reilly and Munakata (2000) and a number of other research models. Thus, the model can be viewed as an instantiation of a systematic mo...


A continuous-time neural model for sequential action.

Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to repre...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2021

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v35i8.16875